Results 1 - 20 of 144,168
1.
J Biomed Opt ; 29(Suppl 2): S22702, 2025 Dec.
Article in English | MEDLINE | ID: mdl-38434231

ABSTRACT

Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.


Subjects
Histological Techniques; Microscopy; Animals; Flow Cytometry; Image Processing, Computer-Assisted
2.
Sci Data ; 11(1): 366, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38605079

ABSTRACT

Radiomics feature (RF) studies have shown limited reproducibility of RFs across different acquisition settings. To date, reproducibility studies using CT images have relied mainly on phantoms, owing to the harm of exposing patients to X-rays. The CadAIver dataset provided here aims to evaluate how CT scanner parameters affect radiomics features in a cadaveric donor. The dataset comprises 112 unique CT acquisitions of a cadaveric trunk acquired on 3 different CT scanners while varying kV, mA, field-of-view, and reconstruction kernel settings. Technical validation of the CadAIver dataset comprises a comprehensive univariate and multivariate GLM approach to assess the stability of each RF extracted from the lumbar vertebrae. The complete dataset is publicly available for future research in the RF field, and could foster the creation of a collaborative open CT image database to increase the sample size, the range of available scanners, and the body districts covered.
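The stability assessment behind a dataset like this can be illustrated with a per-feature coefficient of variation (CV) across acquisition settings. This is a hedged sketch only, since the paper itself uses univariate and multivariate GLMs; the feature values and the 10% threshold below are hypothetical:

```python
import numpy as np

def stability_cv(feature_values):
    """Coefficient of variation (%) of one radiomic feature across settings."""
    v = np.asarray(feature_values, dtype=float)
    return 100.0 * v.std(ddof=1) / abs(v.mean())

# Hypothetical GLCM-entropy values for one vertebra across 4 kernel settings
vals = [6.10, 6.05, 6.20, 6.12]
cv = stability_cv(vals)
stable = cv < 10.0  # a common, but arbitrary, stability threshold
```

A feature whose CV stays low across kV, mA, field-of-view, and kernel changes would be a candidate for cross-scanner use.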


Subjects
Lumbar Vertebrae; Tomography, X-Ray Computed; Humans; Cadaver; Image Processing, Computer-Assisted/methods; Lumbar Vertebrae/diagnostic imaging; Reproducibility of Results; Tomography, X-Ray Computed/methods
3.
Sci Rep ; 14(1): 8738, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38627421

ABSTRACT

Glioblastoma is a brain tumor that arises when abnormal cells form in the brain. It is detected using magnetic resonance imaging (MRI), which uses a powerful magnetic field, radio waves, and a computer to produce detailed images of the body's internal structures; MRI is a standard diagnostic tool for a wide range of medical conditions, from detecting brain and spinal cord injuries to identifying tumors and evaluating joint problems. Glioblastoma is treatable, but left untreated in a child it is fatal, so early diagnosis from MRI scans is essential, and neural networks can help resolve such brain-related diagnostic problems. This research applies two techniques to glioblastoma diagnosis: maximum and minimum rationalization of images, and a boosted division time attribute extraction method. Maximum and minimum rationalization is used to recognize glioblastoma in brain images for treatment efficiency; image segments are created for image recognition; and the boosted division time attribute extraction method performs image recognition and feature extraction on the MRI data. The proposed method recognizes the images and identifies glioblastoma with feasible accuracy using image rationalization. According to the figures reported here, 45% of those affected by the tumor are adults, 40% are children, and 5% of cases end in death; reducing this ratio motivates the identification and segmentation performed in this study.
Tumor grades were then analyzed using the proposed method on the MRI images. The proposed TAE-PIS system achieves an accuracy of 98.12% with low response time, higher than comparison methods: a genetic algorithm (GA), a convolutional neural network (CNN), a fuzzy-based minimum and maximum neural network (fuzzy min-max NN), and a kernel-based support vector machine (SVM). Specifically, the proposed method achieves substantial improvements of 80.82%, 82.13%, 85.61%, and 87.03% over GA, CNN, fuzzy min-max NN, and kernel-based SVM, respectively.


Subjects
Brain Neoplasms; Glioblastoma; Adult; Child; Humans; Glioblastoma/diagnostic imaging; Image Processing, Computer-Assisted/methods; Brain Neoplasms/pathology; Brain/diagnostic imaging; Brain/pathology; Algorithms
4.
Sci Rep ; 14(1): 8504, 2024 04 12.
Article in English | MEDLINE | ID: mdl-38605094

ABSTRACT

This work investigates the clinical feasibility of deep learning-based synthetic CT images for cervical cancer, comparing them to MR for calculating attenuation (MRCAT). A cohort of 50 paired T2-weighted MR and CT images from cervical cancer patients was split into 40 pairs for training and 10 for testing. As a preprocessing step, we applied deformable image registration and Nyul intensity normalization to the MR images to maximize the similarity between the MR and CT images. The processed images were fed into a deep learning model, a generative adversarial network. To establish clinical feasibility, we assessed the accuracy of the synthetic CT images in terms of image similarity, using the structural similarity index (SSIM) and mean absolute error (MAE), and dosimetric similarity, using the gamma passing rate (GPR). Dose calculation was performed on the true and synthetic CT images with a commercial Monte Carlo algorithm. The synthetic CT images generated by deep learning outperformed the MRCAT images in image similarity by 1.5% in SSIM and 18.5 HU in MAE. In dosimetry, the DL-based synthetic CT images achieved GPRs of 98.71% and 96.39% at the 1%/1 mm criterion with 10% and 60% cut-off values of the prescription dose, respectively, which were 0.9% and 5.1% higher than those of the MRCAT images.
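The two image-similarity metrics named above can be sketched in a few lines. Note that standard SSIM (e.g. skimage's `structural_similarity`) averages a locally windowed statistic; the single-window version below is a simplified global variant, and the `data_range` default is an assumption for CT-like intensity ranges:

```python
import numpy as np

def mae_hu(ct_a, ct_b):
    """Mean absolute error between two CT volumes, in Hounsfield units."""
    return np.mean(np.abs(np.asarray(ct_a, float) - np.asarray(ct_b, float)))

def global_ssim(a, b, data_range=2000.0):
    """Single-window (global) SSIM; real SSIM averages over local windows."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = np.mean((a - mu_a) * (b - mu_b))
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Identical images score an SSIM of exactly 1, and MAE directly reports the mean HU discrepancy quoted in the abstract.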


Subjects
Deep Learning; Uterine Cervical Neoplasms; Female; Humans; Uterine Cervical Neoplasms/diagnostic imaging; Feasibility Studies; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed/methods; Radiotherapy Planning, Computer-Assisted/methods
5.
PLoS One ; 19(4): e0299399, 2024.
Article in English | MEDLINE | ID: mdl-38607987

ABSTRACT

In this study, we employed the principle of Relative Mode Transfer Method (RMTM) to establish a model for a single pendulum subjected to sudden changes in its length. An experimental platform for image processing was constructed to accurately track the position of a moving ball, enabling experimental verification of the pendulum's motion under specific operating conditions. The experimental data demonstrated excellent agreement with simulated numerical results, validating the effectiveness of the proposed methodology. Furthermore, we performed simulations of a double obstacle pendulum system, investigating the effects of different parameters, including obstacle pin positions, quantities, and initial release angles, on the pendulum's motion through numerical simulations. This research provides novel insights into addressing the challenges associated with abrupt and continuous changes in pendulum length.


Subjects
Image Processing, Computer-Assisted; Physical Therapy Modalities; Motion (Physics)
6.
Comput Biol Med ; 173: 108370, 2024 May.
Article in English | MEDLINE | ID: mdl-38564854

ABSTRACT

The transformer architecture has achieved remarkable success in medical image analysis owing to its powerful capability for capturing long-range dependencies. However, because it lacks an intrinsic inductive bias for modeling visual structural information, the transformer generally requires a large-scale pre-training schedule, limiting its clinical application to expensive, small-scale medical data. To this end, we propose a slimmable transformer that exploits intrinsic inductive bias via position information for medical image segmentation. Specifically, we empirically investigate how different position encoding strategies affect the prediction quality of the region of interest (ROI) and observe that ROIs are sensitive to the choice of strategy. Motivated by this, we present a novel Hybrid Axial-Attention (HAA) that incorporates pixel-level spatial structure and relative position information as inductive bias. Moreover, we introduce a gating mechanism to achieve efficient feature selection and further improve representation quality on small-scale datasets. Experiments on the LGG and COVID-19 datasets demonstrate the superiority of our method over the baseline and previous works. Visualization of the internal workflow, with interpretability analysis, further validates these results; the proposed slimmable transformer has the potential to be developed into a visual software tool for improving computer-aided lesion diagnosis and treatment planning.


Subjects
COVID-19; Humans; COVID-19/diagnostic imaging; Diagnosis, Computer-Assisted; Software; Workflow; Image Processing, Computer-Assisted
7.
Comput Biol Med ; 173: 108377, 2024 May.
Article in English | MEDLINE | ID: mdl-38569233

ABSTRACT

Observing cortical vascular structures and functions at high resolution using laser speckle contrast imaging (LSCI) plays a crucial role in understanding cerebral pathologies. Open-skull window techniques are usually applied to reduce scattering from the skull and enhance image quality. However, craniotomy surgeries inevitably induce inflammation, which may obstruct observations in certain scenarios. In contrast, image enhancement algorithms provide popular tools for improving the signal-to-noise ratio (SNR) of LSCI. Current methods remain unsatisfactory through the intact skull because the transcranial cortical images are of poor quality. Moreover, existing algorithms do not guarantee the accuracy of dynamic blood flow mappings. In this study, we develop an unsupervised deep learning method, named Dual-Channel in Spatial-Frequency Domain CycleGAN (SF-CycleGAN), to enhance the perceptual quality of cortical blood flow imaging by LSCI. SF-CycleGAN enables convenient, non-invasive, and effective observation of cortical vascular structure and accurate dynamic blood flow mappings without craniotomy, visualizing biodynamics in an undisturbed biological environment. Our experimental results showed that SF-CycleGAN achieved an SNR at least 4.13 dB higher than that of other unsupervised methods, imaged the complete vascular morphology, and enabled functional observation of small cortical vessels. Additionally, the proposed method showed remarkable robustness and generalized to various imaging configurations and image modalities, including fluorescence images, without retraining.
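The dB-scale SNR comparison quoted above can be sketched with one common image-domain definition (mean over standard deviation of a nominally uniform region, expressed in decibels); the exact definition used by the paper may differ:

```python
import numpy as np

def snr_db(region):
    """SNR of a (nominally uniform) image region, in decibels: 20*log10(mean/std)."""
    r = np.asarray(region, dtype=float)
    return 20.0 * np.log10(r.mean() / r.std())

def snr_gain_db(enhanced, baseline):
    """dB improvement of an enhanced region over the unenhanced baseline."""
    return snr_db(enhanced) - snr_db(baseline)
```

A "4.13 dB higher SNR" claim corresponds to `snr_gain_db` evaluated on matched regions of the enhanced and competing outputs.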


Subjects
Hemodynamics; Image Enhancement; Image Enhancement/methods; Skull/diagnostic imaging; Regional Blood Flow/physiology; Head; Image Processing, Computer-Assisted/methods
8.
Comput Biol Med ; 173: 108390, 2024 May.
Article in English | MEDLINE | ID: mdl-38569234

ABSTRACT

Radiotherapy is one of the primary treatment methods for tumors, but organ movement caused by respiration limits its accuracy. Recently, 3D imaging from a single X-ray projection has received extensive attention as a promising way to address this issue. However, current methods can only reconstruct 3D images without directly locating the tumor, and they have been validated only for fixed-angle imaging, which fails to fully meet the requirements of motion control in radiotherapy. In this study, a novel imaging method, RT-SRTS, is proposed that integrates 3D imaging and tumor segmentation into one network based on multi-task learning (MTL), achieving real-time simultaneous 3D reconstruction and tumor segmentation from a single X-ray projection at any angle. Furthermore, attention-enhanced calibrator (AEC) and uncertain-region elaboration (URE) modules are proposed to aid feature extraction and improve segmentation accuracy. The proposed method was evaluated on fifteen patient cases and compared with three state-of-the-art methods. It not only delivers superior 3D reconstruction but also demonstrates commendable tumor segmentation results. Simultaneous reconstruction and segmentation can be completed in approximately 70 ms, significantly faster than the time threshold required for real-time tumor tracking. The efficacy of both AEC and URE has been validated in ablation studies. The code for this work is available at https://github.com/ZywooSimple/RT-SRTS.


Subjects
Imaging, Three-Dimensional; Neoplasms; Humans; Imaging, Three-Dimensional/methods; X-Rays; Radiography; Neoplasms/diagnostic imaging; Respiration; Image Processing, Computer-Assisted/methods
9.
Comput Biol Med ; 173: 108388, 2024 May.
Article in English | MEDLINE | ID: mdl-38569235

ABSTRACT

The COVID-19 pandemic has resulted in hundreds of millions of cases and numerous deaths worldwide. Here, we develop a novel classification network, CECT, built from a controllable ensemble of convolutional neural network and transformer components, to provide timely and accurate COVID-19 diagnosis. CECT is composed of a parallel convolutional encoder block, an aggregate transposed-convolutional decoder block, and a windowed attention classification block. Each block captures features at different scales, from 28 × 28 to 224 × 224, from the input, yielding enriched and comprehensive information. Unlike existing methods, CECT can capture features at both multi-local and global scales without any sophisticated module design. Moreover, the contribution of local features at different scales can be controlled with the proposed ensemble coefficients. We evaluate CECT on two public COVID-19 datasets: it reaches the highest accuracy of 98.1% in the intra-dataset evaluation, outperforming existing state-of-the-art methods, and achieves 90.9% accuracy on the unseen dataset in the inter-dataset evaluation, showing extraordinary generalization ability. Given its remarkable feature capture and generalization abilities, we believe CECT can be extended to other medical scenarios as a powerful diagnostic tool. Code is available at https://github.com/NUS-Tim/CECT.
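The idea of ensemble coefficients controlling the contribution of features at different scales can be loosely sketched as a convex combination of per-scale class logits. The function and values below are illustrative only, not the CECT implementation:

```python
import numpy as np

def ensemble_logits(scale_logits, coeffs):
    """Weighted sum of per-scale class logits; coeffs control each scale's weight."""
    coeffs = np.asarray(coeffs, dtype=float)
    coeffs = coeffs / coeffs.sum()      # normalize to a convex combination
    stacked = np.stack(scale_logits)    # shape (n_scales, n_classes)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Two hypothetical scales voting over 3 classes, with the finer scale up-weighted
fused = ensemble_logits([[2.0, 0.5, 0.1], [0.2, 1.8, 0.3]], coeffs=[0.7, 0.3])
pred = int(np.argmax(fused))
```

Raising one coefficient lets that scale dominate the final prediction, which is the "controllable" aspect the abstract refers to.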


Subjects
COVID-19; Humans; COVID-19 Testing; Pandemics; Neural Networks, Computer; Image Processing, Computer-Assisted
10.
Comput Biol Med ; 173: 108381, 2024 May.
Article in English | MEDLINE | ID: mdl-38569237

ABSTRACT

Multimodal medical image fusion (MMIF) technology plays a crucial role in medical diagnosis and treatment by integrating different images into fusion images with comprehensive information. Deep learning-based fusion methods have demonstrated superior performance, but some still face challenges such as imbalanced retention of color and texture information and low fusion efficiency. To alleviate these issues, this paper presents a real-time MMIF method, a lightweight residual fusion network (LRFNet). First, a feature extraction framework with three branches is designed: two independent branches fully extract brightness and texture information, while the fusion branch lets different modal information interact at a shallow level, thereby better retaining brightness and texture. Furthermore, a lightweight residual unit replaces the conventional residual convolution in the model, improving fusion efficiency and reducing the overall model size to roughly one-fifth. Finally, since the high-frequency image produced by the wavelet transform contains abundant edge and texture information, an adaptive strategy assigns weights to the loss function based on the information content of the high-frequency image, effectively guiding the model toward preserving intricate details. Experimental results on MRI and functional images demonstrate that the proposed method achieves superior fusion performance and efficiency compared to alternative approaches. The code for LRFNet is available at https://github.com/HeDan-11/LRFNet.


Subjects
Image Processing, Computer-Assisted; Wavelet Analysis
11.
Comput Biol Med ; 173: 108293, 2024 May.
Article in English | MEDLINE | ID: mdl-38574528

ABSTRACT

Accurately identifying the Kirsten rat sarcoma virus (KRAS) gene mutation status in colorectal cancer (CRC) patients can assist doctors in deciding whether to use specific targeted drugs for treatment. Although deep learning methods are popular, they are often affected by redundant features from non-lesion areas. Moreover, existing methods commonly extract spatial features from imaging data, which neglect important frequency domain features and may degrade the performance of KRAS gene mutation status identification. To address this deficiency, we propose a segmentation-guided Transformer U-Net (SG-Transunet) model for KRAS gene mutation status identification in CRC. Integrating the strength of convolutional neural networks (CNNs) and Transformers, SG-Transunet offers a unique approach for both lesion segmentation and KRAS mutation status identification. Specifically, for precise lesion localization, we employ an encoder-decoder to obtain segmentation results and guide the KRAS gene mutation status identification task. Subsequently, a frequency domain supplement block is designed to capture frequency domain features, integrating it with high-level spatial features extracted in the encoding path to derive advanced spatial-frequency domain features. Furthermore, we introduce a pre-trained Xception block to mitigate the risk of overfitting associated with small-scale datasets. Following this, an aggregate attention module is devised to consolidate spatial-frequency domain features with global information extracted by the Transformer at shallow and deep levels, thereby enhancing feature discriminability. Finally, we propose a mutual-constrained loss function that simultaneously constrains the segmentation mask acquisition and gene status identification process. Experimental results demonstrate the superior performance of SG-Transunet over state-of-the-art methods in discriminating KRAS gene mutation status.


Subjects
Colorectal Neoplasms; Proto-Oncogene Proteins p21(ras); Humans; Proto-Oncogene Proteins p21(ras)/genetics; Drug Delivery Systems; Mutation/genetics; Neural Networks, Computer; Colorectal Neoplasms/diagnostic imaging; Colorectal Neoplasms/genetics; Image Processing, Computer-Assisted
12.
Med Eng Phys ; 126: 104132, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38621854

ABSTRACT

This research work explores the integration of medical and information technology, particularly focusing on the use of data analytics and deep learning techniques in medical image processing. Specifically, it addresses the diagnosis and prediction of fetal conditions, including Down Syndrome (DS), through the analysis of ultrasound images. Despite existing methods in image segmentation, feature extraction, and classification, there is a pressing need to enhance diagnostic accuracy. Our research delves into a comprehensive literature review and presents advanced methodologies, incorporating sophisticated deep learning architectures and data augmentation techniques to improve fetal diagnosis. Moreover, the study emphasizes the clinical significance of accurate diagnostics, detailing the training and validation process of the AI model, ensuring ethical considerations, and highlighting the potential of the model in real-world clinical settings. By pushing the boundaries of current diagnostic capabilities and emphasizing rigorous clinical validation, this research work aims to contribute significantly to medical imaging and pave the way for more precise and reliable fetal health assessments.


Subjects
Down Syndrome; Humans; Down Syndrome/diagnostic imaging; Image Processing, Computer-Assisted
13.
Opt Express ; 32(7): 11934-11951, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38571030

ABSTRACT

Optical coherence tomography (OCT) can resolve three-dimensional biological tissue structures, but it is inevitably plagued by speckle noise that degrades image quality and obscures biological structure. Recently, unsupervised deep learning methods have become popular for OCT despeckling, but they still require unpaired noisy-clean images or paired noisy-noisy images. To address this problem, we propose what we believe to be a novel unsupervised deep learning method for OCT despeckling, termed Double-free Net, which eliminates the need for ground-truth data and repeated scanning by sub-sampling noisy images and synthesizing noisier images. Compared with existing unsupervised methods, Double-free Net obtains superior denoising performance when trained on datasets comprising retinal and human tissue images without clean images. Its efficacy in denoising holds significant promise for diagnostic applications in retinal pathologies and enhances the accuracy of retinal layer segmentation. Results demonstrate that Double-free Net outperforms state-of-the-art methods and exhibits strong convenience and adaptability across different OCT images.
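The core trick named above, sub-sampling one noisy image into a training pair and synthesizing a noisier input, can be sketched as follows. This is a simplified illustration (fixed diagonal sub-sampling and additive Gaussian noise), not the Double-free Net recipe, which deals with multiplicative speckle:

```python
import numpy as np

rng = np.random.default_rng(0)

def subsample_pair(img):
    """Split one noisy image into two half-resolution sub-images whose noise is
    (approximately) independent; they can serve as input/target for training."""
    a = img[0::2, 0::2]   # even rows and columns
    b = img[1::2, 1::2]   # odd rows and columns
    return a, b

def synthesize_noisier(img, extra_sigma):
    """Make a 'noisier' training input by injecting additional noise
    (Gaussian here purely for simplicity)."""
    return img + rng.normal(0.0, extra_sigma, img.shape)
```

Training a denoiser to map the noisier synthetic input toward the sub-sampled noisy target is what removes the need for clean ground truth or repeated scans.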


Subjects
Algorithms; Tomography, Optical Coherence; Humans; Tomography, Optical Coherence/methods; Retina/diagnostic imaging; Radionuclide Imaging; Image Processing, Computer-Assisted/methods
14.
PLoS One ; 19(4): e0299360, 2024.
Article in English | MEDLINE | ID: mdl-38557660

ABSTRACT

Ovarian cancer is a highly lethal malignancy in the field of oncology. Segmentation of ovarian medical images is generally a necessary prerequisite for diagnosis and treatment planning, so accurately segmenting ovarian tumors is of utmost importance. In this work, we propose a hybrid network called PMFFNet to improve the segmentation accuracy of ovarian tumors. PMFFNet uses an encoder-decoder architecture. Specifically, the encoder incorporates the ViTAEv2 model to extract inter-layer multi-scale features from the feature pyramid. To address the limitation of a fixed window size, which hinders sufficient interaction of information, we introduce Varied-Size Window Attention (VSA) into the ViTAEv2 model to capture rich contextual information. Additionally, recognizing the significance of multi-scale features, we introduce a Multi-scale Feature Fusion Block (MFB) module, which enhances the network's capacity to learn intricate features by capturing both local and multi-scale information, thereby enabling more precise segmentation of ovarian tumors. Finally, in conjunction with our designed decoder, our model achieves outstanding performance on the MMOTU dataset, scoring 97.24%, 91.15%, and 87.25% on the mACC, mIoU, and mDice metrics, respectively. Compared to several UNet-based and more advanced models, our approach demonstrates the best segmentation performance.
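The overlap metrics reported above (mDice, mIoU) reduce, per class, to simple set operations on binary masks; a minimal sketch:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, gt):
    """Intersection over union (Jaccard index) between two binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0
```

The "m" prefixes (mDice, mIoU) denote these values averaged over classes and images.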


Subjects
Ovarian Neoplasms; Female; Humans; Ovarian Neoplasms/diagnostic imaging; Benchmarking; Learning; Medical Oncology; Image Processing, Computer-Assisted
15.
J Biomed Opt ; 29(4): 046001, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38585417

ABSTRACT

Significance: Endoscopic screening for esophageal cancer (EC) may enable early cancer diagnosis and treatment. While optical microendoscopic technology has shown promise in improving specificity, the limited field of view (<1 mm) significantly reduces the ability to survey large areas efficiently in EC screening. Aim: To improve the efficiency of endoscopic screening, we propose a novel concept of end-expandable endoscopic optical fiber probe for larger field of visualization and for the first time evaluate a deep-learning-based image super-resolution (DL-SR) method to overcome the issue of limited sampling capability. Approach: To demonstrate feasibility of the end-expandable optical fiber probe, DL-SR was applied on simulated low-resolution microendoscopic images to generate super-resolved (SR) ones. Varying the degradation model of image data acquisition, we identified the optimal parameters for optical fiber probe prototyping. The proposed screening method was validated with a human pathology reading study. Results: For various degradation parameters considered, the DL-SR method demonstrated different levels of improvement of traditional measures of image quality. The endoscopists' interpretations of the SR images were comparable to those performed on the high-resolution ones. Conclusions: This work suggests avenues for development of DL-SR-enabled sparse image reconstruction to improve high-yield EC screening and similar clinical applications.


Subjects
Barrett Esophagus; Deep Learning; Esophageal Neoplasms; Humans; Optical Fibers; Esophageal Neoplasms/diagnostic imaging; Barrett Esophagus/pathology; Image Processing, Computer-Assisted
16.
Sci Rep ; 14(1): 8253, 2024 04 08.
Article in English | MEDLINE | ID: mdl-38589478

ABSTRACT

This work presents a deep learning approach for rapid and accurate muscle water T2 mapping with subject-specific fat T2 calibration, using multi-spin-echo acquisitions. The method addresses the computational limitations of conventional bi-component Extended Phase Graph fitting methods (nonlinear least-squares and dictionary-based) by leveraging fully connected neural networks for fast processing with minimal computational resources. We validated the approach through in vivo experiments using MRI systems from two different vendors. The results showed strong agreement between our deep learning approach and the reference methods, summarized by Lin's concordance correlation coefficients ranging from 0.89 to 0.97. Further, the deep learning method achieved a significant improvement in computational time, processing data 116 and 33 times faster than the nonlinear least-squares and dictionary methods, respectively. In conclusion, the proposed approach demonstrated significant time and resource efficiency improvements over conventional methods while maintaining similar accuracy. This methodology makes the processing of water T2 data faster and easier for the user and will facilitate the use of quantitative muscle water T2 maps in clinical and research studies.
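The dictionary-based baseline mentioned above can be sketched with a simplified bi-exponential signal model standing in for the full Extended Phase Graph model; the grid ranges, echo times, and fixed fat T2 below are assumptions for illustration:

```python
import numpy as np

def biexp_signal(te, t2_water, t2_fat, water_frac):
    """Simplified two-compartment multi-echo signal (ignores B1/EPG effects)."""
    return water_frac * np.exp(-te / t2_water) + (1 - water_frac) * np.exp(-te / t2_fat)

def dictionary_fit(signal, te, t2_fat):
    """Grid search over water T2 and water fraction by normalized matching."""
    t2w_grid = np.arange(10.0, 80.0, 0.5)    # ms, assumed search range
    fw_grid = np.arange(0.05, 1.0, 0.01)
    s = signal / np.linalg.norm(signal)
    best = (None, None, np.inf)
    for t2w in t2w_grid:
        for fw in fw_grid:
            d = biexp_signal(te, t2w, t2_fat, fw)
            err = np.linalg.norm(s - d / np.linalg.norm(d))
            if err < best[2]:
                best = (t2w, fw, err)
    return best[:2]
```

The exhaustive loop over the dictionary is exactly the cost that the paper's neural network replaces with a single forward pass.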


Subjects
Algorithms; Deep Learning; Water; Calibration; Magnetic Resonance Imaging/methods; Muscles/diagnostic imaging; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; Brain
17.
BMC Med Imaging ; 24(1): 83, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589793

ABSTRACT

This research focuses on the segmentation and classification of leukocytes, a crucial task in medical image analysis for diagnosing various diseases. The leukocyte dataset comprises four classes of images: monocytes, lymphocytes, eosinophils, and neutrophils. Leukocyte segmentation is achieved through image processing techniques including background subtraction, noise removal, and contouring. To isolate leukocytes, background, erythrocyte, and leukocyte masks are created from the blood cell images. The isolated leukocytes are then subjected to data augmentation, including brightness and contrast adjustment, flipping, and random shearing, to improve the generalizability of the CNN model. A deep convolutional neural network (CNN) is trained on the augmented dataset for effective feature extraction and classification. The model consists of four convolutional blocks comprising eleven convolutional layers, eight batch normalization layers, eight Rectified Linear Unit (ReLU) layers, and four dropout layers, capturing increasingly complex patterns. The experiments used a publicly available Kaggle dataset totalling 12,444 images of the four leukocyte types. Results showcase the robustness of the proposed framework, which achieves impressive performance metrics with an accuracy of 97.98% and a precision of 97.97%. These outcomes affirm the efficacy of the devised segmentation and classification approach in accurately identifying and categorizing leukocytes. The combination of an advanced CNN architecture and meticulous pre-processing establishes a foundation for future developments in medical image analysis.
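The augmentation step described above (brightness and contrast adjustment plus flipping) can be sketched as follows; the jitter ranges are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img):
    """Random horizontal flip plus brightness/contrast jitter on an 8-bit image."""
    out = np.asarray(img, dtype=float)
    if rng.random() < 0.5:
        out = out[:, ::-1]                 # horizontal flip
    gain = rng.uniform(0.8, 1.2)           # contrast (multiplicative)
    bias = rng.uniform(-20.0, 20.0)        # brightness (additive)
    return np.clip(gain * out + bias, 0, 255)
```

Applying such randomized transforms to each isolated leukocyte crop enlarges the effective training set and reduces overfitting.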


Subjects
Deep Learning; Humans; Data Curation; Leukocytes; Neural Networks, Computer; Blood Cells; Image Processing, Computer-Assisted/methods
18.
PLoS One ; 19(4): e0299099, 2024.
Article in English | MEDLINE | ID: mdl-38564618

ABSTRACT

Individual muscle segmentation is the process of partitioning medical images into regions representing each muscle. It can be used to isolate spatially structured quantitative muscle characteristics, such as volume, geometry, and the level of fat infiltration. These features are pivotal to measuring the state of muscle functional health and to tracking the body's response to musculoskeletal and neuromusculoskeletal disorders. The gold-standard approach to muscle segmentation requires manual processing of large numbers of images and is associated with significant operator repeatability issues and high time requirements. Deep learning-based techniques have recently been suggested as capable of automating the process, which would catalyse research into the effects of musculoskeletal disorders on the muscular system. In this study, three convolutional neural networks were explored for their capacity to automatically segment twenty-three lower limb muscles of the hips, thighs, and calves from magnetic resonance images. The three networks (UNet, Attention UNet, and a novel Spatial Channel UNet) were trained independently on augmented images to segment 6 subjects, and segmented the muscles with an average Relative Volume Error (RVE) between -8.6% and 2.9%, an average Dice Similarity Coefficient (DSC) between 0.70 and 0.84, and an average Hausdorff Distance (HD) between 12.2 and 46.5 mm, with performance dependent on both the subject and the network used. The trained convolutional neural networks and the data used in this study are openly available, either for re-training on other medical images or for direct application to automatically segment new T1-weighted lower limb magnetic resonance images captured with similar acquisition parameters.
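Two of the reported metrics, Relative Volume Error and Hausdorff Distance, can be sketched directly; the brute-force Hausdorff below is fine for small point sets but far too slow for full 3D surface meshes:

```python
import numpy as np

def relative_volume_error(pred_mask, gt_mask):
    """RVE (%): positive when the prediction over-segments the muscle."""
    p, g = np.count_nonzero(pred_mask), np.count_nonzero(gt_mask)
    return 100.0 * (p - g) / g

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (brute force)."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

RVE captures volumetric bias while HD captures the worst boundary disagreement, which is why papers report both alongside DSC.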


Subjects
Deep Learning , Humans , Female , Animals , Cattle , Image Processing, Computer-Assisted/methods , Postmenopause , Thigh/diagnostic imaging , Muscles , Magnetic Resonance Imaging/methods
19.
Biomed Eng Online ; 23(1): 39, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38566181

ABSTRACT

BACKGROUND: Congenital heart disease (CHD) is one of the most common birth defects in the world. It is the leading cause of infant mortality, necessitating an early diagnosis for timely intervention. Prenatal screening using ultrasound is the primary method for CHD detection. However, its effectiveness is heavily reliant on the expertise of physicians, leading to subjective interpretations and potential underdiagnosis. A method for automatic analysis of fetal cardiac ultrasound images is therefore highly desired to support objective and effective CHD diagnosis. METHOD: In this study, we propose a deep learning-based framework for the identification and segmentation of the three vessels (the pulmonary artery (PA), aorta (Ao), and superior vena cava (SVC)) in the ultrasound three vessel view (3VV) of the fetal heart. In the first stage of the framework, the object detection model YOLOv5 is employed to identify the three vessels and localize the Region of Interest (ROI) within the original full-sized ultrasound images. Subsequently, a modified Deeplabv3 equipped with our novel AMFF (Attentional Multi-scale Feature Fusion) module is applied in the second stage to segment the three vessels within the cropped ROI images. RESULTS: We evaluated our method with a dataset consisting of 511 fetal heart 3VV images. Compared to existing models, our framework exhibits superior performance in the segmentation of all three vessels, achieving Dice coefficients of 85.55%, 89.12%, and 77.54% for the PA, Ao, and SVC, respectively. CONCLUSIONS: Our experimental results show that our proposed framework can automatically and accurately detect and segment the three vessels in fetal heart 3VV images. This method has the potential to assist sonographers in enhancing the precision of vessel assessment during fetal heart examinations.
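The two-stage design (detect, crop, then segment) hinges on cutting the detected ROI out of the full-sized scan before it is passed to the segmentation network. A minimal sketch of that cropping step, assuming boxes in pixel (x1, y1, x2, y2) format; the margin parameter is an illustrative assumption, not taken from the paper:

```python
import numpy as np

def crop_roi(image, box, margin=0.1):
    """Crop a detection box (x1, y1, x2, y2) out of a 2-D image,
    expanding it by a relative margin so the vessels are not clipped."""
    h, w = image.shape[:2]
    x1, y1, x2, y2 = box
    mx = int((x2 - x1) * margin)
    my = int((y2 - y1) * margin)
    x1, y1 = max(0, x1 - mx), max(0, y1 - my)
    x2, y2 = min(w, x2 + mx), min(h, y2 + my)
    return image[y1:y2, x1:x2]

img = np.arange(100).reshape(10, 10)
roi = crop_roi(img, (2, 2, 8, 8), margin=0.0)
print(roi.shape)  # (6, 6)
```

The segmentation stage then operates only on `roi`, so its capacity is spent on the vessels rather than on background tissue.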


Subjects
Deep Learning , Pregnancy , Female , Humans , Vena Cava, Superior , Ultrasonography , Ultrasonography, Prenatal/methods , Fetal Heart/diagnostic imaging , Image Processing, Computer-Assisted/methods
20.
PLoS One ; 19(4): e0298287, 2024.
Article in English | MEDLINE | ID: mdl-38593135

ABSTRACT

Cryo-electron micrographs vary widely in the size, shape, and distribution density of their individual particles, and suffer from severe background noise, high levels of impurities, irregular particle shapes, blurred edges, and particles whose intensity is similar to the background. Picking single particles from multiple types of cryo-electron micrographs with good adaptability therefore remains a challenge in the field. This paper enhances image feature information in the pre-processing stage using the MixUp hybrid augmentation algorithm; builds a feature perception network based on a channel self-attention mechanism in the feed-forward network of the Swin Transformer model, achieving adaptive adjustment of self-attention between different single particles and increasing the network's tolerance to noise; incorporates the PReLU activation function to enhance information exchange between pixel blocks of different single particles; and combines the cross-entropy loss with the softmax function to construct a Swin Transformer-based classification network suited to single-particle detection in cryo-electron micrographs (Swin-cryoEM), achieving mixed detection of multiple types of single particles. The Swin-cryoEM algorithm better addresses the adaptability problem of picking single particles from many types of cryo-electron micrographs, improves the accuracy and generalization ability of single-particle picking, and provides high-quality data support for the three-dimensional reconstruction of single particles. In this paper, ablation and comparison experiments were designed to evaluate Swin-cryoEM in detail and comprehensively on multiple datasets.
Average Precision is an important evaluation index of the model: Swin-cryoEM reached an optimal Average Precision of 95.5% in the training stage, and its single-particle picking performance was also superior in the prediction stage. The model inherits the advantages of the Swin Transformer detection model and outperforms mainstream models such as Faster R-CNN and YOLOv5 in single-particle detection on cryo-electron micrographs.
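MixUp, named above as the pre-processing augmentation, blends pairs of images and their one-hot labels with a Beta-distributed weight. A minimal sketch of the standard formulation; the alpha value and API shape are generic assumptions, not the paper's exact configuration:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Return a convex blend of two samples and their one-hot labels;
    the mixing weight lam is drawn from Beta(alpha, alpha)."""
    rng = rng if rng is not None else np.random.default_rng()
    lam = float(rng.beta(alpha, alpha))
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam

rng = np.random.default_rng(0)
x1, x2 = np.full((2, 2), 1.0), np.full((2, 2), 0.0)   # toy "micrographs"
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # one-hot labels
x, y, lam = mixup(x1, y1, x2, y2, rng=rng)
```

Because the labels are blended with the same weight as the images, the classification head learns soft targets, which is what makes the augmentation useful for noise tolerance.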


Subjects
Algorithms , Electrons , Cryoelectron Microscopy/methods , Image Processing, Computer-Assisted/methods